
    Approaches to Semantic Web Services: An Overview and Comparison

    The next Web generation promises to deliver Semantic Web Services (SWS): services that are self-described and amenable to automated discovery, composition and invocation. A prerequisite to this, however, is the emergence and evolution of the Semantic Web, which provides the infrastructure for the semantic interoperability of Web Services. Web Services will be augmented with rich formal descriptions of their capabilities, such that they can be utilized by applications or other services without human assistance or highly constrained agreements on interfaces or protocols. Thus, Semantic Web Services have the potential to change the way knowledge and business services are consumed and provided on the Web. In this paper, we survey the state of the art of current enabling technologies for Semantic Web Services. In addition, we characterize the infrastructure of Semantic Web Services along three orthogonal dimensions: activities, architecture and service ontology. Further, we examine and contrast three current approaches to SWS according to the proposed dimensions.

    Marginal and weakly nonlinear stability in spatially developing flows

    This work is devoted to revealing the essence of near-critical phenomena in nonlinear problems with nonparallel effects. As a generalization of the well-known concept of linear stability in Fourier space for a parallel basic state, we introduce a new concept valid for nonparallel flows as well. The new picture allows one to demonstrate the possible singular limit to the parallel case. On this basis, we also derive a weakly nonlinear model valid near criticality. The damped Kuramoto-Sivashinsky equation with variable coefficients is used to illustrate the application of the theory.
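
    For concreteness, a damped Kuramoto-Sivashinsky equation with variable coefficients can be sketched in the following generic form; the abstract does not reproduce the paper's exact coefficient functions, so the spatial dependence shown here is an illustrative assumption:

        \[
        \partial_t u + u\,\partial_x u + \alpha(x)\,\partial_x^2 u + \beta(x)\,\partial_x^4 u + \delta(x)\,u = 0,
        \]

    where the slowly varying coefficients \alpha(x) and \beta(x) model the nonparallel (spatially developing) basic state and \delta(x) \ge 0 is the damping; taking all coefficients constant recovers the classical parallel limit.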

    APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters

    We describe herein the APElink+ board, a PCIe interconnect adapter featuring the latest advances in wire speed and interface technology, plus hardware support for an RDMA programming model and experimental acceleration of GPU networking. This design allows us to build the APEnet+ network, a low-latency, high-bandwidth PC cluster interconnect and the new generation of our cost-effective cluster network architecture, scalable to tens of thousands of nodes. Some test results and a characterization of data transmission on a complete testbench, based on a commercial development card mounting an Altera FPGA, are provided.
    Comment: 6 pages, 7 figures, proceedings of CHEP 2010, Taiwan, October 18-2
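
    The abstract cites hardware support for an RDMA programming model but does not show the board's API. The essential one-sided semantics (register a buffer once, then let a remote peer write into it without involving the target CPU) can be sketched conceptually; every name below is hypothetical, and shared process memory stands in for the wire:

        import ctypes

        class ToyNic:
            """Conceptual RDMA model, not the APEnet+ API: buffers are
            registered under a (node, key) pair and written one-sidedly."""
            registry = {}

            @classmethod
            def register(cls, node, key, size):
                # Stand-in for pinning and registering a receive buffer
                buf = ctypes.create_string_buffer(size)
                cls.registry[(node, key)] = buf
                return buf

            @classmethod
            def rdma_put(cls, node, key, offset, payload):
                # One-sided write: the receiving CPU takes no per-message action
                buf = cls.registry[(node, key)]
                ctypes.memmove(ctypes.addressof(buf) + offset, payload, len(payload))

        # The receiver registers once; the sender then writes directly.
        rx = ToyNic.register(node=1, key=0xBEEF, size=64)
        ToyNic.rdma_put(node=1, key=0xBEEF, offset=0, payload=b"halo exchange data")
        print(rx.raw[:18])  # b'halo exchange data'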

    Entropy production at all scales

    Spatially homogeneous systems are characterized by the simultaneous presence of a wide range of time scales. When the dynamics of such reactive systems develop very slow and very fast time scales separated by a range of active time scales, with large gaps between the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the governing ordinary differential equations using the G-Scheme framework. The G-Scheme assumes that the dynamics is decomposed into active, slow, fast and, when applicable, invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by resorting to the estimates provided by the G-Scheme. With reference to a constant-volume, adiabatic batch reactor, we compute the contribution to entropy production of each of the four subspaces. The numerical experiments show that, as indicated by the theoretical derivation, the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the numerical integration, and that the contribution of the slow subspace is generally much smaller than that of the active subspace.
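
    The subspace decomposition the derivation relies on can be illustrated with a minimal sketch: eigendecompose the Jacobian of the right-hand side and bin the modes by characteristic time scale. The fixed thresholds and the toy Jacobian below are illustrative assumptions; the actual G-Scheme selects the active window adaptively, and the invariant subspace is omitted here:

        import numpy as np

        def split_subspaces(jac, tau_fast=1e-6, tau_slow=1e-2):
            """Classify Jacobian eigenmodes by time scale (illustrative thresholds)."""
            evals, evecs = np.linalg.eig(jac)
            tau = 1.0 / np.maximum(np.abs(evals.real), 1e-300)  # guard zero eigenvalues
            fast = tau < tau_fast        # exhausted modes, slaved to the others
            slow = tau > tau_slow        # quasi-frozen modes
            active = ~(fast | slow)      # the only modes integrated explicitly
            return evals, evecs, fast, active, slow

        # Toy Jacobian with three well-separated scales: 1e-8 s, 1e-4 s and 10 s
        J = np.diag([-1e8, -1e4, -0.1])
        _, _, fast, active, slow = split_subspaces(J)
        print(fast.sum(), active.sum(), slow.sum())  # 1 1 1: one mode per subspace

    Per-subspace entropy-production contributions would then follow by projecting the dynamics onto each group of eigenvectors, which is where the paper's estimates enter.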

    Entropy production and the G-Scheme

    Spatially homogeneous batch reactor systems are characterized by the simultaneous presence of a wide range of time scales. When the dynamics of such reactive systems develop very slow and very fast time scales separated by a range of active time scales, with large gaps between the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the governing ordinary differential equations using the G-Scheme framework. The G-Scheme assumes that the dynamics is decomposed into active, slow, fast and, when applicable, invariant subspaces. We computed the contribution to entropy production of the four subspaces, with reference to a constant-volume, adiabatic reactor. The numerical experiments indicate that the contributions of the fast and slow subspaces are much smaller than that of the active subspace.

    Wearable inertial sensors for human movement analysis

    Introduction: The present review aims to provide an overview of the most common uses of wearable inertial sensors in the field of clinical human movement analysis. Areas covered: Six main areas of application are analyzed: gait analysis, stabilometry, instrumented clinical tests, upper-body mobility assessment, daily-life activity monitoring and tremor assessment. Each area is examined from both a methodological and an applicative point of view. The focus on methodological approaches is meant to convey the computational complexity behind a variable/parameter/index of interest, so that the reader is aware of the reliability of the approach. The focus on applications is meant to provide a practical guide advising clinicians on how inertial sensors can help them in their clinical practice. Expert commentary: Less expensive and easier to use than other systems employed in human movement analysis, wearable sensors have evolved to the point that they can be considered ready to become part of routine clinical practice.
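
    As an example of the kind of methodological step the review covers, a common first stage of inertial gait analysis is detecting steps as peaks in the low-pass-filtered acceleration norm. The sampling rate, cutoff and thresholds below are typical textbook choices, not values taken from the review:

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def detect_steps(acc_xyz, fs=100.0, cutoff_hz=3.0, min_step_s=0.4):
            """Return step times (s) from an (N, 3) accelerometer array."""
            norm = np.linalg.norm(acc_xyz, axis=1)          # gravity plus motion
            b, a = butter(4, cutoff_hz / (fs / 2), "low")   # 4th-order low-pass
            smooth = filtfilt(b, a, norm - norm.mean())     # remove DC, then smooth
            peaks, _ = find_peaks(smooth,
                                  height=0.5,               # m/s^2 above baseline
                                  distance=int(min_step_s * fs))
            return peaks / fs

        # Synthetic check: a 1 Hz "step" oscillation riding on gravity at 100 Hz
        t = np.arange(0.0, 10.0, 0.01)
        acc = np.c_[np.zeros_like(t), np.zeros_like(t),
                    9.81 + 2.0 * np.sin(2.0 * np.pi * t)]
        print(len(detect_steps(acc)))  # ~10 detected steps, one per second

    From the detected step times, derived gait variables such as cadence or stride-time variability follow by simple arithmetic.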

    Relationship between molecular connectivity and carcinogenic activity: a confirmation with a new software program based on graph theory.

    For a database of 826 chemicals tested for carcinogenicity, we fragmented the structural formula of the chemicals into all possible contiguous-atom fragments with size between two and eight (nonhydrogen) atoms. The fragmentation was obtained using a new software program based on graph theory. We used 80% of the chemicals as a training set and 20% as a test set. The two sets were obtained by random sorting. From the training sets, an average (8 computer runs with independently sorted chemicals) of 315 different fragments were significantly (p < 0.125) associated with carcinogenicity or lack thereof. Even using this relatively low level of statistical significance, 23% of the molecules of the test sets lacked significant fragments. For 77% of the molecules of the test sets, we used the presence of significant fragments to predict carcinogenicity. The average level of accuracy of the predictions in the test sets was 67.5%. Chemicals containing only positive fragments were predicted with an accuracy of 78.7%. The level of accuracy was around 60% for chemicals characterized by contradictory fragments or only negative fragments. In a parallel manner, we performed eight paired runs in which carcinogenicity was attributed randomly to the molecules of the training sets. The fragments generated by these pseudo-training sets were devoid of any predictivity in the corresponding test sets. Using an independent software program, we confirmed (for the complex biological endpoint of carcinogenicity) the validity of a structure-activity relationship approach of the type proposed by Klopman and Rosenkranz with their CASE program.
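
    The fragment-generation step lends itself to a compact sketch: enumerate every connected subgraph of the (hydrogen-suppressed) molecular graph with two to eight atoms. The paper's own software is not described in detail, so the breadth-first growth below is a generic stand-in, with atom and bond labels omitted:

        def connected_fragments(adj, min_size=2, max_size=8):
            """adj: {atom index: set of neighbor indices}.
            Returns every connected atom set of min_size..max_size as a frozenset."""
            found = set()
            frontier = {frozenset([a]) for a in adj}   # size-1 seeds
            for _ in range(max_size - 1):              # grow by one atom per round
                grown = set()
                for frag in frontier:
                    neighbors = set().union(*(adj[a] for a in frag)) - frag
                    for n in neighbors:
                        grown.add(frag | {n})
                frontier = grown
                found |= {f for f in frontier if len(f) >= min_size}
            return found

        # Toy hydrogen-suppressed molecule: a three-atom chain C0-C1-C2
        adj = {0: {1}, 1: {0, 2}, 2: {1}}
        print(sorted(map(sorted, connected_fragments(adj))))
        # [[0, 1], [0, 1, 2], [1, 2]]

    Each enumerated fragment would then be tested for a statistically significant association with carcinogenicity across the training set, as described above.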